133 research outputs found

    Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations

    Impact factors (and similar measures such as the SCImago Journal Rankings) suffer from two problems: (i) citation behavior varies among fields of science and therefore leads to systematic differences, and (ii) there are no statistics to inform us whether differences are significant. The recently introduced SNIP indicator of Scopus tries to remedy the first of these two problems, but it involves a number of normalization decisions that make it impossible to test for significance. Using fractional counting of citations (based on the assumption that impact is proportional to the number of references in the citing documents), citations can be contextualized at the paper level, and the aggregated impacts of sets can be tested for significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not much lower than that of Molecular Cell (0.386), despite a five-fold difference between their impact factors (2.793 and 13.156, respectively).
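    A minimal sketch of fractional citation counting as described above, assuming each citation is weighted by 1/R, where R is the number of cited references in the citing document (the function name and values are illustrative only):

        def fractional_impact(citing_reference_counts):
            # Each citation contributes 1/R, where R is the number of
            # references in the citing document, so impact is
            # contextualized at the level of the citing paper.
            return sum(1.0 / r for r in citing_reference_counts if r > 0)

        # A paper cited by three documents carrying 10, 50, and 200
        # references: whole counting gives 3, fractional counting gives
        # 1/10 + 1/50 + 1/200 = 0.125.
        print(fractional_impact([10, 50, 200]))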

    Normalization at the field level: fractional counting of citations

    Van Raan et al. (2010; arXiv:1003.2113) have proposed a new indicator (MNCS) for field normalization. Since field normalization is also used in the Leiden Rankings of universities, we elaborate in this rejoinder the critique of journal normalization that we formulated in Opthof & Leydesdorff (2010; arXiv:1002.2769), extending it to field normalization. Fractional citation counting thoroughly solves the issue of normalizing for differences in citation behavior among fields. This indicator can also be used to obtain a normalized impact factor.
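    A sketch of the field normalization discussed above, assuming the MNCS-style form (the average of per-paper observed/expected ratios against a field mean); the citation counts and field means are made-up illustration values:

        def mncs(citations, field_means):
            # 'New crown indicator' form: normalize each paper by the
            # mean citation rate of its field, then average the ratios.
            return sum(c / e for c, e in zip(citations, field_means)) / len(citations)

        # Two hypothetical papers with 4 and 12 citations, in fields
        # whose mean citation rates are 2.0 and 8.0 respectively.
        print(mncs([4, 12], [2.0, 8.0]))  # (2.0 + 1.5) / 2 = 1.75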

    Remaining problems with the "New Crown Indicator" (MNCS) of the CWTS

    In their article entitled "Towards a new crown indicator: some theoretical considerations," Waltman et al. (2010; arXiv:1003.2167) show that the "old crown indicator" of CWTS in Leiden was mathematically inconsistent and that one should move to the normalization applied in the "new crown indicator." Although we now agree about the statistical normalization, the "new crown indicator" inherits a scientometric problem of the "old" one: it treats the subject categories of journals as the standard for normalizing differences in citation behavior among fields of science. We further note that the mean is not a proper statistic for measuring differences among skewed distributions. Without changing the acronym "MNCS," one could define a "Median Normalized Citation Score." This would relate the new crown indicator directly to the percentile approach used, for example, in the Science and Engineering Indicators of the US National Science Board (2010). The median is by definition equal to the 50th percentile, so the indicator can easily be extended with the 1% (= 99th percentile) most highly cited papers (Bornmann et al., in press). The seeming disadvantage of having to use non-parametric statistics is more than compensated for by the possible gains in precision.
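    A sketch of the median variant proposed above, with made-up numbers chosen to show how a single highly cited outlier dominates the mean but not the median of the normalized scores:

        import statistics

        def median_ncs(citations, field_means):
            # 'Median Normalized Citation Score': the median of the
            # observed/expected ratios, i.e. the 50th percentile, which
            # is robust against skewed citation distributions.
            ratios = [c / e for c, e in zip(citations, field_means)]
            return statistics.median(ratios)

        cites, means = [1, 2, 3, 100], [2.0, 2.0, 2.0, 2.0]
        print(sum(c / e for c, e in zip(cites, means)) / 4)  # mean   = 13.25
        print(median_ncs(cites, means))                      # median = 1.25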

    Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance

    The Center for Science and Technology Studies (CWTS) at Leiden University advocates the use of specific normalizations for assessing research performance with reference to a world average. The Journal Citation Score (JCS) and Field Citation Score (FCS) are averaged for the research group or individual researcher under study, and these values are then used as denominators of the mean Citations per publication (CPP). This normalization is thus based on dividing two averages, a procedure that only generates a legitimate indicator when the underlying distributions are normal. Given the skewed distributions under study, one should instead first divide the observed by the expected values for each publication and then average the resulting ratios. We show the effects of the Leiden normalization for a recent evaluation for which we happened to have access to the underlying data.
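    The difference between the two orders of operation can be made explicit in a short sketch (the citation counts and expected rates are hypothetical):

        def ratio_of_means(citations, expected):
            # Leiden-style normalization criticized above: the mean of
            # the observed values divided by the mean of the expected.
            return (sum(citations) / len(citations)) / (sum(expected) / len(expected))

        def mean_of_ratios(citations, expected):
            # Alternative advocated above: divide observed by expected
            # for each publication first, then average the ratios.
            return sum(c / e for c, e in zip(citations, expected)) / len(citations)

        # For skewed data the two diverge: citations [1, 9] against
        # expected rates [1.0, 3.0].
        print(ratio_of_means([1, 9], [1.0, 3.0]))  # 5.0 / 2.0 = 2.5
        print(mean_of_ratios([1, 9], [1.0, 3.0]))  # (1.0 + 3.0) / 2 = 2.0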

    Differences in citation frequency of clinical and basic science papers in cardiovascular research

    In this article, a critical analysis is performed of differences in the citation frequency of basic and clinical cardiovascular papers. It appears that clinical papers are cited at a roughly 40% higher frequency, and the differences in the citation counts of the most highly cited papers are even larger. It is also demonstrated that the groups of clinical and basic cardiovascular papers are themselves heterogeneous with respect to citation frequency. It is concluded that none of the existing citation indicators takes these differences into account; at this moment, these indicators should not be used for the quality assessment of individual scientists or of scientific niches with small numbers of scientists.